
    Dynamic Voltage Scaling Techniques for Energy Efficient Synchronized Sensor Network Design

    Building energy-efficient systems is one of the principal challenges in wireless sensor networks. Dynamic voltage scaling (DVS), a technique to reduce energy consumption by varying the CPU frequency on the fly, has been widely used in other settings to accomplish this goal. In this paper, we show that changing the CPU frequency can affect timekeeping functionality of some sensor platforms. This phenomenon can cause an unacceptable loss of time synchronization in networks that require tight synchrony over extended periods, thus preventing all existing DVS techniques from being applied. We present a method for reducing energy consumption in sensor networks via DVS, while minimizing the impact of CPU frequency switching on time synchronization. The system is implemented and evaluated on a network of 11 Imote2 sensors mounted on a truss bridge and running a high-fidelity continuous structural health monitoring application. Experimental measurements confirm that the algorithm significantly reduces network energy consumption over the same network that does not use DVS, while requiring significantly fewer re-synchronization actions than a classic DVS algorithm.
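    The core DVS trade-off the abstract describes can be sketched in a few lines: run at the lowest frequency that still meets the sensing deadline, since dynamic energy grows roughly quadratically with frequency when voltage scales with it. The frequencies, cycle counts, and energy constant below are illustrative assumptions, not the paper's model.

    ```python
    def pick_frequency(cycles, deadline_s, freqs_hz):
        """Return the lowest available frequency that meets the deadline, or None."""
        for f in sorted(freqs_hz):
            if cycles / f <= deadline_s:
                return f
        return None

    def energy_per_task(cycles, f, k=1e-27):
        # Assumed model: dynamic energy ~ k * cycles * f^2
        # (supply voltage taken to scale linearly with frequency).
        return k * cycles * f ** 2

    freqs = [104e6, 208e6, 416e6]          # hypothetical Imote2-like settings
    f = pick_frequency(5e6, 0.05, freqs)   # 5M cycles, 50 ms deadline
    ```

    Under this toy model the slowest deadline-feasible frequency is always the energy-optimal choice; the paper's contribution is doing this while keeping the platform's timekeeping intact across frequency switches.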

    Garbage Collection of Actors

    This paper considers the garbage collection of concurrent objects for which it is necessary to know not only "reachability", the usual criterion for reclaiming data, but also the "state" (active or blocked) of the object. For the actor model, a more comprehensive definition than previously available is given for reclaimable actors. Two garbage collection algorithms, implementing a set of "coloring" rules, are presented and their computational complexity is analyzed. Extensions are briefly described to allow incremental, concurrent, distributed and real-time collection. It is argued that the techniques used for the actor model apply to other object-oriented concurrent models.
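    The key point, that liveness of an actor depends on its state and not just on reachability, can be illustrated with a much-simplified marking sketch. The real coloring rules in the paper are considerably subtler; here we only keep actors reachable from a root or from some active actor, so a blocked, unreferenced actor is reclaimable even though it holds references itself.

    ```python
    def live_actors(actors, refs, roots):
        """actors: {name: 'active' | 'blocked'}
        refs:   {name: set of acquaintance names}
        roots:  externally reachable actor names.
        Keep an actor iff it is reachable from a root or from any active actor."""
        seeds = set(roots) | {a for a, s in actors.items() if s == 'active'}
        live, stack = set(), list(seeds)
        while stack:
            a = stack.pop()
            if a in live:
                continue
            live.add(a)
            stack.extend(refs.get(a, ()))
        return live

    actors = {'a': 'active', 'b': 'blocked', 'c': 'blocked'}
    refs = {'a': {'b'}, 'b': set(), 'c': set()}
    # 'c' is blocked and unreachable from any active actor or root -> reclaimable,
    # even though a purely reachability-based collector rooted at all actors
    # would never notice.
    ```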

    Measuring service quality and assessing its relationship to contraceptive discontinuation: A prospective cohort study in Pakistan and Uganda

    Background: The quality of contraceptive counseling that women receive from their provider can influence their future contraceptive continuation. We examined (1) whether the quality of contraceptive service provision could be measured in a consistent way by using existing tools from 2 large-scale social franchises, and (2) whether facility quality measures based on these tools were consistently associated with contraceptive discontinuation. Methods: We linked existing, routinely collected facility audit data from social franchise clinics in Pakistan and Uganda with client data. Clients were women aged 15-49 who initiated a modern, reversible contraceptive method from a sampled clinic. Consented participants completed an exit interview and were contacted 3, 6, and 12 months later. We collapsed indicators into quality domains using theory-based categorization, created summative quality domain scores, and used Cox proportional hazards models to estimate the relationship between these quality domains and discontinuation while in need of contraception. Results: The 12-month all-modern method discontinuation rate was 12.5% among the 813 enrolled women in Pakistan and 5.1% among the 1,185 women in Uganda. We did not observe similar associations between facility-level quality measures and discontinuation across these 2 settings. In Pakistan, an increase in the structural privacy domain was associated with a 60% lower risk of discontinuation, adjusting for age and baseline method (P=.005). Conclusions: We were not able to leverage existing, widely used quality measurement tools to create quality domains that were consistently associated with discontinuation in 2 study settings. Given the importance of contraceptive service quality and recent advances in indicator standardization in other areas, we recommend further effort to harmonize and simplify measurement tools to measure and improve contraceptive quality of care for all.
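    As a brief reminder of the model behind the discontinuation analysis (standard notation, not taken from the study itself), the Cox proportional hazards model writes the hazard of discontinuation at time t for a woman with covariates x as

    ```latex
    h(t \mid \mathbf{x}) = h_0(t)\,\exp\!\left(\beta_1 x_1 + \cdots + \beta_p x_p\right)
    ```

    where h_0(t) is an unspecified baseline hazard and each coefficient enters as a hazard ratio exp(beta_j). The reported 60% lower risk of discontinuation for the structural privacy domain thus corresponds to a hazard ratio of 0.40.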

    Virtual Machine Support for Many-Core Architectures: Decoupling Abstract from Concrete Concurrency Models

    The upcoming many-core architectures require software developers to exploit concurrency to utilize available computational power. Today's high-level language virtual machines (VMs), which are a cornerstone of software development, do not provide sufficient abstraction for concurrency concepts. We analyze concrete and abstract concurrency models and identify the challenges they impose for VMs. To provide sufficient concurrency support in VMs, we propose to integrate concurrency operations into VM instruction sets. Since there will always be VMs optimized for special purposes, our goal is to develop a methodology to design instruction sets with concurrency support. Therefore, we also propose a list of trade-offs that have to be investigated to advise the design of such instruction sets. As a first experiment, we implemented one instruction set extension for shared memory and one for non-shared memory concurrency. From our experimental results, we derived a list of requirements for a full-grown experimental environment for further research.
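    What "concurrency operations in the instruction set" might look like can be illustrated with a toy interpreter (not the paper's design): the bytecode itself carries message-passing operations, so the non-shared-memory concurrency model is expressed by the VM rather than by a library on top of it.

    ```python
    from collections import deque

    def run(programs):
        """programs: {pid: list of (op, arg)} with a toy instruction set:
        PUSH v        -- push a value on the process's stack
        SEND pid      -- pop a value, enqueue it in pid's mailbox
        RECV None     -- block until the mailbox is non-empty, push the message
        A simple round-robin scheduler retries blocked processes."""
        mailboxes = {pid: deque() for pid in programs}
        pcs = {pid: 0 for pid in programs}
        stacks = {pid: [] for pid in programs}
        progress = True
        while progress:
            progress = False
            for pid, prog in programs.items():
                if pcs[pid] >= len(prog):
                    continue
                op, arg = prog[pcs[pid]]
                if op == 'PUSH':
                    stacks[pid].append(arg)
                elif op == 'SEND':
                    mailboxes[arg].append(stacks[pid].pop())
                elif op == 'RECV':
                    if not mailboxes[pid]:
                        continue  # blocked: retry on the next scheduler pass
                    stacks[pid].append(mailboxes[pid].popleft())
                pcs[pid] += 1
                progress = True
        return stacks

    progs = {
        'p1': [('PUSH', 42), ('SEND', 'p2')],
        'p2': [('RECV', None)],
    }
    ```

    Because SEND and RECV are instructions rather than library calls, a VM built this way can observe, schedule, and optimize concurrency directly, which is the kind of decoupling of abstract from concrete concurrency models the abstract argues for.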

    Significant benefits of AIP testing and clinical screening in familial isolated and young-onset pituitary tumors

    Context Germline mutations in the aryl hydrocarbon receptor-interacting protein (AIP) gene are responsible for a subset of familial isolated pituitary adenoma (FIPA) cases and sporadic pituitary neuroendocrine tumors (PitNETs). Objective To compare prospectively diagnosed AIP mutation-positive (AIPmut) PitNET patients with clinically presenting patients and to compare the clinical characteristics of AIPmut and AIPneg PitNET patients. Design 12-year prospective, observational study. Participants & Setting We studied probands and family members of FIPA kindreds and sporadic patients with disease onset ≤18 years or macroadenomas with onset ≤30 years (n = 1477). This was a collaborative study conducted at referral centers for pituitary diseases. Interventions & Outcome AIP testing and clinical screening for pituitary disease. Comparison of characteristics of prospectively diagnosed (n = 22) vs clinically presenting AIPmut PitNET patients (n = 145), and AIPmut (n = 167) vs AIPneg PitNET patients (n = 1310). Results Prospectively diagnosed AIPmut PitNET patients had smaller lesions with less suprasellar extension or cavernous sinus invasion and required fewer treatments with fewer operations and no radiotherapy compared with clinically presenting cases; there were fewer cases with active disease and hypopituitarism at last follow-up. When comparing AIPmut and AIPneg cases, AIPmut patients were more often males, younger, more often had GH excess, pituitary apoplexy, suprasellar extension, and more patients required multimodal therapy, including radiotherapy. AIPmut patients (n = 136) with GH excess were taller than AIPneg counterparts (n = 650). Conclusions Prospectively diagnosed AIPmut patients show better outcomes than clinically presenting cases, demonstrating the benefits of genetic and clinical screening. AIP-related pituitary disease has a wide spectrum ranging from aggressively growing lesions to stable or indolent disease course.

    Concolic Testing of Multithreaded Programs and Its Application to Testing Security Protocols

    Testing concurrent programs that accept data inputs is notoriously hard because, besides the large number of possible data inputs, nondeterminism results in an exponentially large number of interleavings of concurrent events. We propose a novel testing algorithm for concurrent programs in which our goal is not only to execute all reachable statements of a program, but to detect all possible data races and deadlock states. The algorithm uses a combination of symbolic and concrete execution (called concolic execution) to explore all distinct causal structures (or partial order relations among events generated during execution) of a concurrent program. The idea of concolic testing is to use the symbolic execution to generate inputs that direct a program to alternate paths, and to use the concrete execution to guide the symbolic execution along a concrete path. Symbolic values (variables) are replaced by concrete values if the symbolic state is too complex to be handled by a constraint solver. Our algorithm uses the concrete execution to determine the exact causal structure (or the partial order relations among the events in a concurrent execution) of an execution at runtime. We use this structure to provide a novel technique for exploring only distinct causal structures of a concurrent program with complex data inputs. We describe jCUTE, a tool implementing the testing algorithm, together with the results of applying jCUTE to examples of Java code. Finally, we propose a novel framework on top of jCUTE which allows us to easily implement and analyze cryptographic and security-related protocols. Our effort has been successful in finding security bugs in two cryptographic protocols, suggesting the feasibility of this framework.
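    The concolic loop itself, stripped of the concurrency and constraint-solver machinery that jCUTE adds, fits in a short sketch: run the program concretely while recording the branch constraints taken on a symbolic input, then negate a recorded constraint and search for an input that flips it, driving execution down a new path. The brute-force "solver" over a small integer domain is a stand-in assumption; a real tool would call an SMT solver.

    ```python
    def program(x, trace):
        # Toy program under test; records each branch decision on input x.
        if x > 10:
            trace.append(('x > 10', True))
            return 'big'
        trace.append(('x > 10', False))
        if x == 7:
            trace.append(('x == 7', True))
            return 'lucky'
        trace.append(('x == 7', False))
        return 'small'

    def solve(constraints, domain=range(-50, 51)):
        # Stand-in for a constraint solver: brute-force a small integer domain.
        preds = {'x > 10': lambda x: x > 10, 'x == 7': lambda x: x == 7}
        for x in domain:
            if all(preds[c](x) == want for c, want in constraints):
                return x
        return None

    def explore(seed=0):
        paths, worklist, seen = {}, [seed], set()
        while worklist:
            x = worklist.pop()
            trace = []
            paths[program(x, trace)] = x
            # Negate each recorded branch (keeping the prefix) to reach new paths.
            for i in range(len(trace)):
                prefix = trace[:i] + [(trace[i][0], not trace[i][1])]
                key = tuple(prefix)
                if key not in seen:
                    seen.add(key)
                    nxt = solve(prefix)
                    if nxt is not None:
                        worklist.append(nxt)
        return paths
    ```

    Starting from any seed, `explore` discovers inputs covering all three paths of the toy program, which is the data-input half of the algorithm; the paper's contribution is combining this with exploration of distinct causal structures among concurrent events.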

    Automated Systematic Testing of Open Distributed Programs

    We present an algorithm for automatic testing of distributed programs, such as Unix processes with inter-process communication and Web services. Specifically, we assume that a program consists of a number of asynchronously executing concurrent processes or actors which may take data inputs and communicate using asynchronous messages. Because of the large numbers of possible data inputs as well as the asynchrony in the execution and communication, distributed programs exhibit very large numbers of potential behaviors. Our goal is twofold: to execute all reachable statements of a program, and to detect deadlock states. Specifically, our algorithm uses simultaneous concrete and symbolic execution, or concolic execution, to explore all distinct behaviors that may result from a program's execution given different data inputs and schedules. The key idea is as follows. We use the symbolic execution to generate data inputs that may lead to alternate behaviors. At the same time, we use the concrete execution to determine, at runtime, the partial order of events in the program's execution. This enables us to improve the efficiency of our algorithm by avoiding many tests which would result in equivalent behaviors. We describe our experience with dCUTE, a prototype tool that we have developed for distributed Java programs.
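    The runtime partial-order idea can be made concrete with vector clocks (an illustrative mechanism; dCUTE's bookkeeping is richer): two events are causally ordered only if one's clock is componentwise less than or equal to the other's, and reorderings of concurrent (unordered) events yield equivalent behaviors that need not be re-tested.

    ```python
    def happens_before(a, b):
        """a, b: vector clocks as equal-length tuples, one entry per process."""
        return all(x <= y for x, y in zip(a, b)) and a != b

    def concurrent(a, b):
        # Neither event causally precedes the other: their order is irrelevant.
        return not happens_before(a, b) and not happens_before(b, a)

    send  = (1, 0)   # process 0's send event
    recv  = (1, 1)   # process 1 just after receiving that message
    other = (0, 1)   # an unrelated local event on process 1
    ```

    Here `send` happens before `recv`, so a tester must respect that order, while `send` and `other` are concurrent, so only one of their two interleavings needs to be explored.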

    Energy Bounded Scalability Analysis of Parallel Algorithms

    The amount of energy available in some contexts is strictly limited. For example, in mobile computing, available energy is constrained by battery capacity. As multicore processors scale to large numbers of cores, it becomes possible to significantly vary the number and frequency of the cores used in order to manage the performance and energy consumption of an algorithm. We develop a method to analyze the scalability of an algorithm given an energy budget. The resulting energy-bounded scalability analysis can be used to optimize performance of a parallel algorithm executed on a scalable multicore architecture given an energy budget. We illustrate our methodology by analyzing the behavior of four parallel algorithms on scalable multicore architectures: namely, parallel addition, two versions of parallel quicksort, and a parallel version of Prim's Minimum Spanning Tree algorithm. We study the sensitivity of energy-bounded scalability to changes in parameters such as the ratio of the energy required for a computational operation versus the energy required for communicating a unit message. Our results show that changing the number and frequency of cores used in a multicore architecture could significantly improve performance under fixed energy budgets.
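    A minimal sketch of such an analysis, under an assumed cost model rather than the paper's: execution time splits into a perfectly parallel work term plus a communication term that grows with core count, dynamic power scales cubically with frequency, and we pick the fastest (cores, frequency) configuration whose total energy fits the budget.

    ```python
    def exec_time(work, n, f, comm_per_core=0.01):
        # Assumed model: parallel work term plus communication overhead
        # that grows linearly with the number of cores n.
        return work / (n * f) + comm_per_core * n

    def energy(work, n, f, k=1.0):
        # Assumed model: power per core ~ k * f^3, drawn for the whole run.
        return n * k * f ** 3 * exec_time(work, n, f)

    def best_config(work, budget, cores=(1, 2, 4, 8, 16), freqs=(0.5, 1.0, 2.0)):
        """Fastest (time, cores, frequency) whose energy fits the budget."""
        feasible = [(exec_time(work, n, f), n, f)
                    for n in cores for f in freqs
                    if energy(work, n, f) <= budget]
        return min(feasible) if feasible else None
    ```

    Even in this toy model the qualitative result of the abstract appears: with a tight budget the best configuration uses many cores at a moderate frequency rather than few cores at the maximum frequency, because energy grows much faster with frequency than with core count.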

    PMaude: Rewrite-based Specification Language for Probabilistic Object Systems

    We introduce a rewrite-based specification language for modelling probabilistic concurrent and distributed systems. The language, PMaude, has both a rigorous formal basis and the characteristics of a high-level functional programming language. Furthermore, we provide tool support for performing discrete-event simulations of models written in PMaude, and for statistically verifying formal properties of such models based on the samples that are generated through discrete-event simulation. Because distributed and concurrent communication protocols can be modelled using actors (concurrent objects with asynchronous message passing), we provide an actor PMaude module. The module aids writing specifications in a probabilistic actor formalism. This allows us to easily write specifications that are purely probabilistic - and not just non-deterministic. The absence of such (un-quantified) non-determinism in a probabilistic system is necessary for a form of statistical model-checking of probabilistic temporal logic properties that we also discuss.
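    The statistical-verification idea can be sketched independently of PMaude: estimate the probability that a purely probabilistic model satisfies a property by sampling discrete-event runs, with Hoeffding's bound sizing the sample for a desired error and confidence. The one-line "model" below is a trivial stand-in for a simulated run.

    ```python
    import math
    import random

    def samples_needed(eps, delta):
        # Hoeffding: n >= ln(2/delta) / (2*eps^2) samples guarantee
        # |estimate - p| < eps with probability at least 1 - delta.
        return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

    def estimate(model_run, prop, eps=0.05, delta=0.01, rng=random.Random(0)):
        n = samples_needed(eps, delta)
        return sum(prop(model_run(rng)) for _ in range(n)) / n

    # Stand-in model: a message is delivered with probability 0.9.
    run = lambda rng: rng.random() < 0.9
    p = estimate(run, lambda delivered: delivered)
    ```

    This is exactly why the abstract insists on the absence of unquantified non-determinism: sampling only yields a well-defined probability when every choice in the model is probabilistic.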

    Link Quality Estimation for Data-Intensive Sensor Network Applications

    The efficiency of multi-hop communication is a function of the time required for data transfer, or throughput. A key determinant of throughput is the reliability of packet transmission, as measured by the packet reception rate. We follow a data-driven statistical approach to dynamically determine a link quality estimate (LQE), which provides a good predictor of packet reception rates. Our goal is to enable efficient multi-hop communication for applications characterized by data-intensive, bursty communication in large sensor networks. Statistical analysis and experiments carried out on a network of 20 Imote2 sensors under a variety of environmental conditions show that the metric is a superior predictor of throughput for bursty data transfer workloads.
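    One common shape for such a data-driven estimator, shown here only as an illustration (the paper's statistical LQE may differ), is an exponentially weighted moving average over observed packet reception rates: each burst's reception rate nudges a smoothed estimate that then predicts the throughput of the next bulk transfer.

    ```python
    def ewma_lqe(burst_prrs, alpha=0.9, init=1.0):
        """burst_prrs: packet reception rates observed per burst, each in [0, 1].
        alpha: smoothing factor; higher values weight history more heavily.
        Returns the estimate after each burst."""
        est = init
        history = []
        for prr in burst_prrs:
            est = alpha * est + (1 - alpha) * prr
            history.append(est)
        return history

    estimates = ewma_lqe([1.0, 0.8, 0.6])
    ```

    The smoothing factor trades agility for stability: a large alpha resists transient fades but reacts slowly to genuine link degradation, which matters for the bursty workloads the abstract targets.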